
    Motion detection in normal infants and young patients with infantile esotropia

    The purpose of this study was to investigate asymmetries in the detection of horizontal motion in normal infants and children and in patients with infantile esotropia. Motion detection thresholds (% motion signal) were measured in 75 normal infants and in 36 eyes of 27 infants with infantile esotropia (ET), using a forced-choice preferential looking paradigm with random-dot patterns. Absolute motion detection sensitivity and asymmetries in sensitivity for nasalward (N) vs. temporalward (T) directions of motion were compared in normal and patient populations ranging in age from 1 month to 5 years. In normal infants, N and T thresholds were equivalent under 2.5 months of age, whereas a superiority for monocular detection of N motion was observed between 3.5 and 6.5 months of age. The nasalward advantage gradually diminished to symmetrical T:N performance by 8 months of age, matching that of adults. No asymmetry was observed in 15 normal infants who performed the task binocularly; hence, the asymmetry was not a leftward/rightward bias. In the youngest infantile ET patients tested, at 5 months of age, a nasalward superiority in motion detection was observed that was equivalent to that of same-age normal infants. However, unlike in normals, this asymmetry persisted in older patients; the greater asymmetry in infantile ET reflects worse detection of T than of N motion. This is the first report of an asymmetry in motion detection in normal infants across a wide age range. Initially, motion detection is normal in infants with infantile esotropia; cumulative abnormal binocular experience in these patients may disrupt motion mechanisms.
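The abstract does not state how the T:N asymmetry was computed; one conventional way to express it, assumed here purely for illustration, is a ratio of the two motion-signal thresholds:

```python
# Hedged sketch: an assumed T:N asymmetry ratio over motion detection
# thresholds (% motion signal needed for detection). A ratio > 1 means
# temporalward motion requires more signal, i.e. a nasalward advantage.
# Threshold values below are illustrative, not data from the study.

def tn_asymmetry(nasalward_threshold, temporalward_threshold):
    """Return the T/N threshold ratio; 1.0 indicates symmetric performance."""
    return temporalward_threshold / nasalward_threshold

print(tn_asymmetry(20.0, 30.0))  # → 1.5, a nasalward advantage
```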

    Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children

    Evidence from adult studies of deaf signers supports a dissociation between the neural systems involved in processing visual linguistic and non-linguistic body actions. How and when this specialization arises is poorly understood. Visual attention to these forms is likely to change with age and to be affected by prior language experience. The present study used eye-tracking methodology with infants and children as they freely viewed alternating video sequences of lexical American Sign Language (ASL) signs and non-linguistic body actions (self-directed grooming actions and object-directed pantomime). In Experiment 1, we quantified fixation patterns using an area-of-interest (AOI) approach and calculated face preference index (FPI) values to assess developmental differences between 6- and 11-month-old hearing infants. Both groups were from monolingual English-speaking homes with no prior exposure to sign language. Six-month-olds attended to the signer’s face for grooming, but for mimes and signs they were drawn to the “articulatory space” where the hands and arms primarily fall. Eleven-month-olds, on the other hand, showed similar attention to the face for all body action types. We interpret this to reflect an early visual language sensitivity that diminishes with age, just before the child’s first birthday. In Experiment 2, we contrasted 18 hearing monolingual English-speaking children (mean age 4.8 years) with 13 hearing children of deaf adults (CODAs; mean age 5.7 years) whose primary language at home was ASL. Native signing children showed a significantly greater face attentional bias than non-signing children for ASL signs, but not for grooming and mimes. The differences in visual attention patterns contingent on age (in infants) and language experience (in children) may be related both to linguistic specialization over time and to the emerging awareness of communicative gestural acts.
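The abstract does not give the exact FPI formula; a common operationalization, assumed here for illustration only, is the share of looking time on the face AOI relative to the face and articulatory-space AOIs combined:

```python
# Hedged sketch: one plausible face preference index (FPI), assumed to be
# the proportion of fixation time on the face AOI out of the total fixation
# time on the face and articulatory-space AOIs. The abstract does not state
# the study's actual formula; the looking times below are illustrative.

def face_preference_index(face_ms, articulatory_ms):
    """FPI in [0, 1]: 1.0 means all fixation time fell on the face AOI."""
    total = face_ms + articulatory_ms
    if total == 0:
        raise ValueError("no fixation time recorded for these AOIs")
    return face_ms / total

print(face_preference_index(1200, 800))  # → 0.6 (face bias)
```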

    Automaticity of lexical access in deaf and hearing bilinguals: Cross-linguistic evidence from the color Stroop task across five languages

    The well-known Stroop interference effect has been instrumental in revealing the highly automated nature of lexical processing, as well as providing new insights into the underlying lexical organization of first and second languages in proficient bilinguals. The present cross-linguistic study had two goals: (1) to examine Stroop interference for dynamic signs and printed words in deaf ASL-English bilinguals who report no reliance on speech or audiological aids; and (2) to compare Stroop interference effects in several groups of bilinguals whose two languages range from very distinct to very similar in their shared orthographic patterns: ASL-English bilinguals (very distinct), Chinese-English bilinguals (low similarity), Korean-English bilinguals (moderate similarity), and Spanish-English bilinguals (high similarity). Reaction time and accuracy were measured for the Stroop color naming and word reading tasks, in congruent and incongruent color font conditions. Results confirmed strong Stroop interference for both dynamic ASL stimuli and English printed words in deaf bilinguals, with stronger Stroop interference effects in ASL for deaf bilinguals who scored higher on a direct assessment of ASL proficiency. Comparison of the four groups revealed that the same-script bilinguals (Spanish-English) exhibited significantly greater Stroop interference effects for color naming than the other three groups. The results support three conclusions. First, Stroop interference effects are found for both signed and spoken languages. Second, contrary to some claims in the literature that deaf signers who do not use speech are poor readers, deaf bilinguals’ lexical processing of both signs and written words is highly automated. Third, cross-language similarity is a critical factor shaping bilinguals’ experience of Stroop interference in their two languages. This study represents the first comparison of both deaf and hearing bilinguals on the Stroop task, offering a critical test of theories about bilingual lexical access and cognitive control.
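The Stroop interference effect described above is conventionally quantified as the reaction-time cost of incongruent relative to congruent trials; a minimal sketch of that standard computation, with illustrative numbers rather than data from this study:

```python
# Hedged sketch: Stroop interference quantified as the mean reaction-time
# (RT) difference between incongruent and congruent trials. The RT values
# are illustrative, not data from the study.

def stroop_interference(congruent_rts, incongruent_rts):
    """Return the mean RT difference in ms: incongruent minus congruent."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Illustrative RTs (ms) for one participant's color-naming trials
congruent = [520, 540, 510, 530]    # font color matches the word/sign
incongruent = [650, 670, 640, 660]  # font color conflicts with the word/sign

print(stroop_interference(congruent, incongruent))  # → 130.0 ms of interference
```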

    Effects of iconicity and semantic relatedness on lexical access in American Sign Language
